Few-shot text classification method based on prompt learning
Bihui YU, Xingye CAI, Jingxuan WEI
Journal of Computer Applications    2023, 43 (9): 2735-2740.   DOI: 10.11772/j.issn.1001-9081.2022081295
Abstract

Text classification tasks usually rely on sufficient labeled data. To address the over-fitting of classification models on small samples in low-resource scenarios, a few-shot text classification method based on prompt learning, called BERT-P-Tuning, was proposed. Firstly, the pre-trained model BERT (Bidirectional Encoder Representations from Transformers) was used to learn the optimal prompt template from labeled samples. Then, the prompt template and a vacancy were added to each sample, transforming the text classification task into a cloze task. Finally, the final labels were obtained by predicting the word with the highest probability at the vacant position and applying the mapping between the predicted words and the labels. Experimental results on the short text classification tasks of the public dataset FewCLUE show that the proposed method improves the evaluation metrics significantly compared to the BERT fine-tuning based method. Specifically, the proposed method increases the accuracy and F1 score by 25.2 and 26.7 percentage points respectively on the binary classification task, and by 6.6 and 8.0 percentage points respectively on the multi-class classification task. Compared with the PET (Pattern Exploiting Training) method, which constructs templates manually, the proposed method increases the accuracy by 2.9 and 2.8 percentage points on the two tasks, and the F1 score by 4.4 and 4.2 percentage points. These results verify the effectiveness of applying pre-trained models to few-shot tasks.
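
To illustrate the cloze-style reformulation described in the abstract, the following is a minimal sketch using the Hugging Face transformers library. It assumes a Chinese BERT checkpoint, a hypothetical fixed prompt template, and a hypothetical word-to-label (verbalizer) mapping; it is not the authors' BERT-P-Tuning implementation, which additionally learns the prompt template from labeled samples.

```python
# Minimal sketch of cloze-style (prompt-based) text classification with a
# masked language model. Template, verbalizer words, and checkpoint name are
# placeholders, not the paper's actual choices.
import torch
from transformers import BertTokenizer, BertForMaskedLM

model_name = "bert-base-chinese"                      # placeholder checkpoint
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForMaskedLM.from_pretrained(model_name)
model.eval()

# Verbalizer: map candidate words at the [MASK] position to class labels.
verbalizer = {"好": "positive", "差": "negative"}      # hypothetical mapping

def classify(text: str) -> str:
    # Wrap the input in a cloze-style prompt; the model fills the [MASK] slot.
    prompt = f"这条评论很[MASK]。{text}"                # hypothetical template
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        # Vocabulary scores at the masked position.
        logits = model(**inputs).logits[0, mask_pos]
    # Restrict the prediction to the verbalizer words and pick the most probable one.
    candidate_words = list(verbalizer.keys())
    candidate_ids = tokenizer.convert_tokens_to_ids(candidate_words)
    best = logits[0, candidate_ids].argmax().item()
    return verbalizer[candidate_words[best]]

print(classify("物流很快，包装也很好"))
```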
